
    Quantitative Isoperimetric Inequalities on the Real Line

    In a recent paper, A. Cianchi, N. Fusco, F. Maggi, and A. Pratelli have shown that, in Gauss space, a set of given measure and almost minimal Gaussian boundary measure is necessarily close to a half-space. Using only geometric tools, we extend their result to all symmetric log-concave measures μ on the real line. We give sharp quantitative isoperimetric inequalities and prove that, among sets of given measure and given asymmetry (distance to half-lines, i.e. distance to the sets of minimal perimeter), intervals or complements of intervals have minimal perimeter. Comment: 14 pages, 3 figures

    Exact Reconstruction using Beurling Minimal Extrapolation

    We show that measures with finite support on the real line are the unique solution to an algorithm, named generalized minimal extrapolation, involving only a finite number of generalized moments (which encompass the standard moments, the Laplace transform, the Stieltjes transformation, etc.). Generalized minimal extrapolation shares related geometric properties with the basis pursuit of Chen, Donoho and Saunders [CDS98]. Indeed, we also extend some standard results of compressed sensing (the dual polynomial, the nullspace property) to the signed measure framework. We express exact reconstruction in terms of a simple interpolation problem. We prove that every nonnegative measure supported on a set containing s points can be exactly recovered from only 2s + 1 generalized moments. This result leads to a new construction of deterministic sensing matrices for compressed sensing. Comment: 27 pages, 3 figures; version 2: minor changes and new title
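    The "2s + 1 moments determine an s-atom nonnegative measure" claim can be illustrated with a classical Prony-type computation; this is a hedged numerical sketch with hypothetical support points and weights, not the paper's generalized minimal extrapolation algorithm.

```python
import numpy as np

# Sketch: a nonnegative measure with s = 2 support points is pinned
# down by its first 2s + 1 = 5 power moments (Prony-type recovery).
x_true = np.array([0.3, 0.8])    # hypothetical support points
a_true = np.array([1.0, 2.0])    # hypothetical positive weights
s = len(x_true)

# Power moments m_k = sum_j a_j * x_j**k for k = 0, ..., 2s
m = np.array([np.sum(a_true * x_true**k) for k in range(2 * s + 1)])

# The support points are the roots of a degree-s polynomial whose
# coefficients solve a small Hankel linear system.
H = np.array([[m[i + j] for j in range(s)] for i in range(s)])
c = np.linalg.solve(H, -m[s:2 * s])
x_rec = np.sort(np.roots(np.r_[1.0, c[::-1]]).real)

# The weights then solve a (generalized) Vandermonde system.
V = np.vander(x_rec, 2 * s + 1, increasing=True).T
a_rec, *_ = np.linalg.lstsq(V, m, rcond=None)
```

Here the Hankel system encodes the fact that the polynomial vanishing on the support annihilates shifted moment sequences; the paper's generalized moments (Laplace, Stieltjes, ...) replace the powers x^k in this toy setting.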

    Non-uniform spline recovery from small degree polynomial approximation

    We investigate the sparse spikes deconvolution problem on spaces of algebraic polynomials. Our framework encompasses the measure reconstruction problem from a combination of noiseless and noisy moment measurements. We study a TV-norm regularization procedure to localize the support and estimate the weights of a target discrete measure in this framework. Furthermore, we derive quantitative bounds on the support recovery and amplitude errors under a Chebyshev-type minimal separation condition on the support. Incidentally, we study the localization of the knots of non-uniform splines when a Gaussian perturbation of their inner products with a known polynomial basis is observed (i.e. a small-degree polynomial approximation is known) and the boundary conditions are known. We prove that the knots can be recovered in a grid-free manner using semidefinite programming.

    Randomized pick-freeze for sparse Sobol indices estimation in high dimension

    This article investigates a new procedure to estimate the influence of each variable of a given function defined on a high-dimensional space. More precisely, we are concerned with describing a function of a large number p of parameters that depends only on a small number s of them. Our proposed method is an unconstrained ℓ1-minimization based on Sobol's method. We prove that, with only O(s log p) evaluations of f, one can find which parameters are relevant.
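    The classical pick-freeze idea underlying the method can be sketched in a few lines: to estimate the first-order Sobol index of a variable, re-evaluate the function with that variable "frozen" and the others redrawn, then take a normalized covariance. This is a hedged Monte Carlo sketch on a hypothetical additive test function, not the paper's ℓ1-based high-dimensional procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test function of p = 2 uniform [0, 1] variables.
def f(x):
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

n, p = 200000, 2
X = rng.uniform(size=(n, p))
Xpf = rng.uniform(size=(n, p))
Xpf[:, 0] = X[:, 0]            # "pick-freeze": keep x1, redraw the rest

Y, Ypf = f(X), f(Xpf)
# First-order Sobol index of x1: Cov(Y, Y_pf) / Var(Y)
S1 = np.cov(Y, Ypf)[0, 1] / np.var(Y, ddof=1)
```

For this f the exact index is Var(2 x1) / Var(Y) = 4 / 4.25 ≈ 0.941; the paper's contribution is doing such estimation for all p variables at once with only O(s log p) evaluations when just s indices are nonzero.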

    Deterministic Constructions for Sparse Regression (Constructions déterministes pour la régression parcimonieuse)

    In this thesis we investigate some deterministic designs for sparse regression. Our problem is mainly inspired by Compressed Sensing, which is concerned with simultaneously acquiring and compressing a signal of large size from a small number of linear measurements. More precisely, we establish a link between the estimation and prediction errors of standard estimators (the lasso, the Dantzig selector, and basis pursuit) and the distortion of the null space of the design, which measures how "far" the Manhattan norm is from the Euclidean norm on that subspace. Our study shows that every construction of subspaces with low distortion (so-called "almost" Euclidean subspaces) yields "good" designs. In a second part, we are interested in designs constructed from unbalanced expander graphs, and we precisely establish their performance in terms of estimation and prediction error. Finally, we treat the exact reconstruction of signed measures on the real line: we show that every generalized Vandermonde system gives a design from which any sparse vector can be exactly recovered from a dramatically small number of observations. In an independent part, we investigate the stability of the isoperimetric inequality on the real line for log-concave measures.

    Power of the Spacing test for Least-Angle Regression

    Recent advances in post-selection inference have shown that conditional testing is relevant and tractable in high dimensions. In the Gaussian linear model, further works have derived unconditional test statistics such as the Kac-Rice pivot for general penalized problems. In order to test the global null, a prominent offspring of this breakthrough is the spacing test, which accounts for the relative separation between the first two knots of the celebrated least-angle regression (LARS) algorithm. However, no results have been shown regarding the distribution of these test statistics under the alternative. For the first time, this paper addresses this important issue for the spacing test and shows that it is unconditionally unbiased. Furthermore, we provide the first extension of the spacing test to the framework of unknown noise variance. More precisely, we investigate the power of the spacing test for LARS and prove that it is unbiased: its power is always greater than or equal to the significance level α. In particular, we describe the power of this test under various scenarios: we prove that its rejection region is optimal when the predictors are orthogonal; as the level α goes to zero, we show that the probability of getting a true positive is much greater than α; and we give a detailed description of its power in the case of two predictors. Moreover, we numerically investigate a comparison between the spacing test for LARS and Pearson's chi-squared test (goodness of fit). Comment: 22 pages, 8 figures
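    In the orthogonal-design case mentioned above, the first two LARS knots reduce to the two largest values of |Xᵀy|, and the spacing statistic is a ratio of Gaussian tail probabilities at those knots. The following hedged sketch (assuming known noise variance σ = 1 and the global null; the general-design statistic is more involved) just computes that ratio, which is a p-value in (0, 1].

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Orthogonal design via a QR factorization of a random matrix.
n = 50
X = np.linalg.qr(rng.standard_normal((n, n)))[0]
y = rng.standard_normal(n)          # global null: y ~ N(0, I)

# With X orthogonal, the first two LARS knots are the two largest
# entries of |X^T y|.
corr = np.sort(np.abs(X.T @ y))[::-1]
lam1, lam2 = corr[0], corr[1]

# Spacing statistic: ratio of upper Gaussian tails at the knots.
spacing_stat = norm.sf(lam1) / norm.sf(lam2)
```

Since λ1 ≥ λ2, the ratio lies in (0, 1]; under the global null it is (approximately, in this reduced orthogonal form) uniformly distributed, which is what makes it usable as a p-value.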

    Consistent estimation of the filtering and marginal smoothing distributions in nonparametric hidden Markov models

    In this paper, we consider the filtering and smoothing recursions in nonparametric finite state space hidden Markov models (HMMs) when the parameters of the model are unknown and replaced by estimators. We provide an explicit and time-uniform control of the filtering and smoothing errors in total variation norm as a function of the parameter estimation errors. We prove that the risk for the filtering and smoothing errors may be uniformly upper bounded by the risk of the estimators. It has been proved very recently that statistical inference for finite state space nonparametric HMMs is possible. We study how the recent spectral methods developed in the parametric setting may be extended to the nonparametric framework, and we give explicit upper bounds for the L2-risk of the nonparametric spectral estimators. When the observation space is compact, this provides explicit rates for the filtering and smoothing errors in total variation norm. The performance of the spectral method is assessed with simulated data, both for the estimation of the (nonparametric) conditional distribution of the observations and for the estimation of the marginal smoothing distributions. Comment: 27 pages, 2 figures. arXiv admin note: text overlap with arXiv:1501.0478
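    The filtering recursion whose stability is at stake above alternates an update by the emission likelihoods with a prediction through the transition matrix, normalizing at each step. A minimal sketch for a hypothetical 2-state HMM with Gaussian emissions (all parameters below are illustrative, not estimated as in the paper):

```python
import numpy as np

# Hypothetical 2-state HMM parameters.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # row-stochastic transition matrix
pi0 = np.array([0.5, 0.5])        # initial distribution

def emission(y, state):
    # Hypothetical Gaussian emission densities, means 0 and 3.
    mu = (0.0, 3.0)[state]
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2.0 * np.pi)

def filtering(ys):
    """Return the filtering distributions P(X_t | Y_1, ..., Y_t)."""
    phi = pi0.copy()
    out = []
    for y in ys:
        g = np.array([emission(y, k) for k in range(2)])
        phi = g * phi              # update with the new observation
        phi = phi / phi.sum()      # normalize (total variation scale)
        out.append(phi)
        phi = phi @ Q              # predict: propagate through Q
    return np.array(out)

out = filtering(np.array([0.0, 0.0, 3.0]))
```

If Q, pi0 and the emission densities are replaced by estimators, the paper's result controls, uniformly in time, how far the resulting filtering distributions drift from these exact ones in total variation.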